Conditionally Independent Component Extraction for Naive Bayes Inference
Author
Abstract
This paper extends the framework of independent component analysis (ICA) to supervised learning. The key idea is to find a representation of the input variables whose components are conditionally independent given the output. Such a representation is well suited to naive Bayes learning, which has been reported to perform as well as more sophisticated methods. The learning algorithm is derived from a criterion similar to that of ICA, in which two-dimensional entropy plays the role that one-dimensional entropy plays in ICA.
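One reading of the abstract is that the two-dimensional entropy enters through the identity H(s_i | y) = H(s_i, y) - H(y), so that conditional independence of the components s_i given the output y can be scored through pairwise entropies of each component with the output, in place of the one-dimensional marginal entropies H(s_i) used in ICA contrasts; this interpretation is not spelled out in the abstract and should be checked against the full paper.

As a rough illustration of the inference side only (not the paper's extraction algorithm), the sketch below assumes a linear transform W whose outputs s = Wx are conditionally independent given the class y, models each component with a class-conditional Gaussian, and classifies by p(y | s) proportional to p(y) * prod_i p(s_i | y). The names fit_naive_bayes and predict, the Gaussian density choice, and the identity placeholder for W are illustrative assumptions; learning W from the entropy-based criterion is the paper's contribution and is not reproduced here.

import numpy as np

def fit_naive_bayes(S, y):
    """Estimate class priors and per-component Gaussian parameters from
    extracted components S (one row per sample) and class labels y."""
    params = {}
    for c in np.unique(y):
        Sc = S[y == c]
        params[c] = {
            "prior": len(Sc) / len(S),
            "mean": Sc.mean(axis=0),
            "var": Sc.var(axis=0) + 1e-6,  # variance floor for numerical stability
        }
    return params

def predict(params, S):
    """Return the class maximizing log p(y) + sum_i log p(s_i | y)."""
    classes = sorted(params)
    log_post = np.empty((len(S), len(classes)))
    for j, c in enumerate(classes):
        p = params[c]
        # Sum of per-component Gaussian log densities = log prod_i p(s_i | y = c)
        log_lik = -0.5 * (np.log(2 * np.pi * p["var"])
                          + (S - p["mean"]) ** 2 / p["var"]).sum(axis=1)
        log_post[:, j] = np.log(p["prior"]) + log_lik
    return np.array(classes)[log_post.argmax(axis=1)]

# Hypothetical usage: W stands in for the transform learned by the paper's
# entropy-based criterion; here it is just the identity for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
W = np.eye(4)          # placeholder for the learned extraction matrix
S = X @ W.T            # extracted components s = W x
model = fit_naive_bayes(S, y)
print(predict(model, S[:5]))

The Gaussian per-component density is only one convenient choice; because the assumed conditional independence factorizes the likelihood, any one-dimensional density estimate per component and class would slot into the same predict step.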